Image and video synthesis has become a flourishing topic in the computer vision and machine learning communities along with the development of deep generative models, owing to its great academic and application value. As humans are among the most commonly seen object categories in daily life, many researchers have been devoted to synthesizing high-fidelity human images, and a large number of studies have been conducted based on various deep generative models, task settings, and applications. It is therefore necessary to give a comprehensive overview of these methods for human image generation. In this paper, we divide human image generation techniques into three paradigms, i.e., data-driven methods, knowledge-guided methods, and hybrid methods. For each route, the most representative models and their variants are presented, and the advantages and characteristics of the different methods are summarized in terms of model architectures and input/output requirements. Besides, the main public human image datasets and evaluation metrics in the literature are also summarized. Furthermore, due to the wide application potential, two typical downstream usages of synthesized human images are covered, i.e., data augmentation for person recognition tasks and virtual try-on for fashion customers. Finally, we discuss the challenges and potential directions of human image generation to shed light on future research.
Existing approaches for vision-and-language navigation (VLN) are mainly based on cross-modal reasoning over discrete views. However, this scheme may hamper an agent's spatial and numerical reasoning because of incomplete objects within a single view and duplicate observations across views. A potential solution is to map discrete views into a unified bird's-eye view, which can aggregate the partial and duplicate observations. Existing metric maps could achieve this goal, but they suffer from less expressive semantics (e.g., usually predefined labels) and limited map size, which weakens an agent's language grounding and long-term planning ability. Inspired by the robotics community, we introduce hybrid topo-metric maps into VLN, where a topological map is used for long-term planning and a metric map for short-term reasoning. Beyond mapping with more expressive deep features, we further design a pre-training framework over the hybrid map to learn language-informed map representations, which enhances cross-modal grounding and facilitates the final language-guided navigation goal. Extensive experiments demonstrate the effectiveness of the map-based route for VLN, and the proposed method sets a new state of the art on three VLN benchmarks.
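As a toy illustration of the hybrid-map idea (a sketch under assumptions, not the paper's implementation), long-term planning over the topological layer can be viewed as a shortest-path search over a graph of visited viewpoints, while each node would additionally carry a local metric grid for short-term reasoning. All node names here are hypothetical:

```python
from collections import deque

# Topological layer: a graph of visited viewpoints (illustrative names).
# In a real agent, each node would also store a local metric feature grid.
topo = {"start": ["hall"], "hall": ["start", "kitchen", "bedroom"],
        "kitchen": ["hall"], "bedroom": ["hall"]}

def plan(graph, src, dst):
    # BFS over the topological map -> a sequence of subgoal viewpoints,
    # standing in for the long-term planning role of the topological layer.
    prev, q = {src: None}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                q.append(nxt)
    return None

print(plan(topo, "start", "kitchen"))  # -> ['start', 'hall', 'kitchen']
```

Short-term reasoning (e.g., deciding how to reach the next subgoal) would then operate on the metric map attached to the current node.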
Text-driven person image generation is an emerging and challenging task in cross-modality image generation. Controllable person image generation promotes a wide range of applications such as digital human interaction and virtual try-on. However, previous methods mostly employ single-modality information as the prior condition (e.g., pose-guided person image generation), or utilize preset words for text-driven human synthesis. Introducing a sentence composed of free words, together with an editable semantic pose map, to describe person appearance is a more user-friendly way. In this paper, we propose HumanDiffusion, a coarse-to-fine alignment diffusion framework for text-driven person image generation. Specifically, two collaborative modules are proposed: the Stylized Memory Retrieval (SMR) module for fine-grained feature distillation in data processing, and the Multi-scale Cross-modality Alignment (MCA) module for coarse-to-fine feature alignment in diffusion. These two modules guarantee the alignment quality between text and image, from image level to feature level and from low resolution to high resolution. As a result, HumanDiffusion realizes open-vocabulary person image generation with desired semantic poses. Extensive experiments conducted on DeepFashion demonstrate the superiority of our method over previous approaches. Moreover, better results can be obtained for complicated person images with various details and uncommon poses.
Few-shot learning (FSL) aims to recognize novel queries with only a few support samples by leveraging prior knowledge from a base dataset. In this paper, we consider the domain shift problem in FSL and aim to address the domain gap between the support set and the query set. Different from previous cross-domain FSL work (CD-FSL), which considers the domain shift between base and novel classes, the new problem, termed cross-domain cross-set FSL (CDCS-FSL), requires the few-shot learner not only to adapt to the new domain, but also to stay consistent across different domains within each novel class. To this end, we propose a novel approach, namely StabPA, to learn prototypical compact and cross-domain aligned representations, so that the domain shift and few-shot learning can be addressed simultaneously. We evaluate our approach on two new CDCS-FSL benchmarks built from the DomainNet and Office-Home datasets, respectively. Remarkably, our approach outperforms multiple elaborated baselines, e.g., improving 5-shot accuracy by 6.0 points on DomainNet. Code is available at https://github.com/wentaochen0813/cdcs-fsl
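For readers unfamiliar with the prototype-based classification that approaches like StabPA build on, a minimal sketch is shown below (illustrative features and a toy episode, not the paper's code):

```python
import numpy as np

def prototypes(support_feats, support_labels, n_way):
    # One prototype per novel class: the mean support embedding of that class.
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_way)])

def classify(query_feats, protos):
    # Assign each query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode with 2-D "features".
support = np.array([[0.0, 0.0], [0.2, 0.0], [1.0, 1.0], [0.8, 1.0]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.1], [0.9, 0.9]])
protos = prototypes(support, labels, n_way=2)
print(classify(queries, protos))  # -> [0 1]
```

Under a support-query domain gap, the prototypes and query embeddings drift apart, which is exactly the failure mode CDCS-FSL targets by aligning representations across domains.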
Attribute skew hinders current federated learning (FL) frameworks from obtaining consistent optimization directions across clients, which inevitably leads to degraded performance and unstable convergence. The core problems lie in that: 1) domain-specific attributes, which are non-causal and only locally valid, are indeliberately mixed into the global aggregation; and 2) one-stage optimization over the entangled attributes cannot simultaneously satisfy the two conflicting objectives of generalization and personalization. To address these problems, we propose Disentangled Federated Learning (DFL), which disentangles the domain-specific and cross-client invariant attributes into two complementary branches trained by the proposed alternating local-global optimization. Importantly, convergence analysis proves that the FL system can converge stably even when incomplete client models participate in the global aggregation, which greatly expands the application scope of FL. Extensive experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate than SOTA FL methods, under both manually synthesized and realistic attribute skews.
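The decoupling idea can be illustrated with a toy sketch (assumptions of ours, not the DFL implementation): only the shared, invariant branch is averaged by the server, while each client's domain-specific branch stays local:

```python
import numpy as np

def server_aggregate(client_shared_weights):
    # Plain FedAvg, but applied to the shared (invariant) branch only.
    return np.mean(client_shared_weights, axis=0)

# Toy vectors stand in for network parameters of the two branches.
clients = [
    {"shared": np.array([1.0, 2.0]), "personal": np.array([9.0])},
    {"shared": np.array([3.0, 4.0]), "personal": np.array([-9.0])},
]
new_shared = server_aggregate([c["shared"] for c in clients])
for c in clients:
    # Broadcast the aggregated shared branch; the personal branch is
    # never sent to the server, preserving per-client specialization.
    c["shared"] = new_shared
print(new_shared)  # -> [2. 3.]
```

In the paper's alternating scheme, local steps would update both branches on client data before the next round of shared-branch aggregation.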
In recent years, conditional image synthesis has attracted growing attention due to its controllability in the image generation process. Although recent works have achieved realistic results, most of them do not handle fine-grained styles with subtle details. To address this problem, a novel normalization module, named DRAN, is proposed. It learns fine-grained style representations while maintaining robustness to overall styles. Specifically, we first introduce a multi-level structure, spatiality-aware pyramid pooling, to guide the model to learn coarse-to-fine features. Then, to adaptively fuse different styles, we propose dynamic gating, which makes it possible to select different styles according to different spatial regions. To evaluate the effectiveness and generalization ability of DRAN, we conduct a set of experiments on makeup transfer and semantic image synthesis. Quantitative and qualitative results show that, equipped with DRAN, baseline models achieve significant improvements in complex style transfer and texture detail reconstruction.
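The spatially adaptive gating described above can be sketched, under our own simplifying assumptions (a sigmoid gate blending two style feature maps per pixel; not the DRAN module itself), as:

```python
import numpy as np

def gated_fuse(feat_coarse, feat_fine, gate_logits):
    # Per-pixel sigmoid gate in (0, 1): large positive logits select the
    # fine-grained style, large negative logits the coarse style.
    g = 1.0 / (1.0 + np.exp(-gate_logits))
    return g * feat_fine + (1 - g) * feat_coarse

coarse = np.zeros((2, 2))
fine = np.ones((2, 2))
# A gate map that picks the fine style in the top row, coarse in the bottom.
logits = np.array([[10.0, 10.0], [-10.0, -10.0]])
print(np.round(gated_fuse(coarse, fine, logits)))
```

In a trained model, the gate logits would themselves be predicted from the input, so the style selection varies with spatial region as the abstract describes.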
Generalizable person re-identification (ReID) has attracted growing attention in the recent computer vision community. In this work, we construct a structural causal model (SCM) among identity labels, identity-specific factors (clothes/shoe color, etc.), and domain-specific factors (background, viewpoint, etc.). According to the causal analysis, we propose a novel Domain-Invariant Representation learning framework for generalizable person Re-IDentification (DIR-ReID). Specifically, we first propose to disentangle the identity-specific and domain-specific feature spaces, based on which we propose an effective algorithmic implementation of backdoor adjustment, which is essentially a causal intervention on the SCM. Extensive experiments have been conducted, showing that DIR-ReID outperforms state-of-the-art methods on large-scale domain generalization ReID benchmarks.
In object detection, bounding-box regression (BBR) is a crucial step that determines object localization performance. However, we find that most previous loss functions for BBR have two main drawbacks: (i) both $\ell_n$-norm and IoU-based loss functions are inefficient in depicting the objective of BBR, which leads to slow convergence and inaccurate regression results; (ii) most loss functions ignore the imbalance problem in BBR, namely that the large number of anchor boxes with small overlaps with the target boxes contribute most to the optimization of BBR. To mitigate the resulting adverse effects, we perform a thorough study to exploit the potential of BBR losses in this paper. Firstly, an Efficient Intersection over Union (EIoU) loss is proposed, which explicitly measures the discrepancies of three geometric factors in BBR, i.e., overlap area, central point, and side length. After that, we state the Effective Example Mining (EEM) problem and propose a regression version of focal loss to make the regression process focus on high-quality anchor boxes. Finally, the above two parts are combined to obtain a new loss function, namely Focal-EIoU loss. Extensive experiments are conducted on both synthetic and real datasets. Notable superiority, in both convergence speed and localization accuracy, can be achieved over other BBR losses.
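The three geometric factors named above suggest a loss of the form "1 - IoU plus normalized center-distance, width, and height penalties", measured against the smallest enclosing box. A minimal sketch for axis-aligned boxes given as (x1, y1, x2, y2) (an illustration of that structure, not the reference implementation):

```python
def eiou_loss(box, gt):
    # Overlap-area term: 1 - IoU.
    x1, y1 = max(box[0], gt[0]), max(box[1], gt[1])
    x2, y2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_b = (box[2] - box[0]) * (box[3] - box[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_b + area_g - inter)
    # Smallest box enclosing both, used to normalize the penalties.
    cw = max(box[2], gt[2]) - min(box[0], gt[0])
    ch = max(box[3], gt[3]) - min(box[1], gt[1])
    # Central-point term: squared center distance over squared diagonal.
    bx, by = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    dist = ((bx - gx) ** 2 + (by - gy) ** 2) / (cw ** 2 + ch ** 2)
    # Side-length terms: squared width/height gaps, each normalized.
    dw = ((box[2] - box[0]) - (gt[2] - gt[0])) ** 2 / cw ** 2
    dh = ((box[3] - box[1]) - (gt[3] - gt[1])) ** 2 / ch ** 2
    return 1 - iou + dist + dw + dh

print(eiou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```

The focal reweighting would then scale this loss by a power of the IoU so that low-overlap anchors contribute less, addressing the imbalance the abstract describes.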
Although many studies have successfully applied transfer learning to medical image segmentation, very few of them have investigated the selection strategy when multiple source tasks are available for transfer. In this paper, we propose a prior-knowledge-guided and transferability-based framework to select the best source tasks among a collection of brain image segmentation tasks, to improve the transfer learning performance on a given target task. The framework consists of modality analysis, RoI (region of interest) analysis, and transferability estimation, such that the source task selection can be refined step by step. Specifically, we adapt state-of-the-art analytical transferability estimation metrics to medical image segmentation tasks and further show that their performance can be significantly boosted by filtering candidate source tasks based on modality and RoI characteristics. Our experiments on brain matter, brain tumor, and white matter hyperintensities segmentation datasets reveal that transferring from different tasks under the same modality is often more successful than transferring from the same task under different modalities. Furthermore, within the same modality, transferring from the source task that has stronger RoI shape similarity with the target task can significantly improve the final transfer performance. Such similarity can be captured using the Structural Similarity (SSIM) index in the label space.
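A single-window SSIM between two binary RoI masks can serve as the shape-similarity score mentioned above; the numpy sketch below uses a global window and illustrative constants, a simplification of the standard windowed SSIM:

```python
import numpy as np

def global_ssim(mask_a, mask_b, c1=1e-4, c2=9e-4):
    # Single-window SSIM over whole binary masks: compares mean occupancy
    # (luminance), variance (contrast), and covariance (structure).
    a, b = mask_a.astype(float), mask_b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

# Two identically shaped RoIs vs. a small disjoint RoI.
m1 = np.zeros((8, 8)); m1[2:6, 2:6] = 1
m2 = np.zeros((8, 8)); m2[2:6, 2:6] = 1
m3 = np.zeros((8, 8)); m3[0:2, 0:2] = 1
print(global_ssim(m1, m2) > global_ssim(m1, m3))  # -> True
```

In practice one would compute this over label maps of candidate source and target tasks and rank source tasks by the resulting score.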
Modern deep neural networks have achieved superhuman performance in tasks ranging from image classification to game play. Surprisingly, these various complex systems with massive amounts of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon, known as "Neural Collapse," was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove the occurrence of Neural Collapse for deep linear networks under the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our study to imbalanced data under the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
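One signature of Neural Collapse is that the centered class means converge to a simplex equiangular tight frame (ETF): equal norms and identical pairwise cosines of -1/(K-1). A short numerical check of these ETF properties (a sketch of the geometry, not the paper's analysis):

```python
import numpy as np

K = 4  # number of classes
# The standard K-point simplex ETF: rescaled, centered identity rows.
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

norms = np.linalg.norm(M, axis=1)           # all class means have unit norm
cos = M @ M.T / np.outer(norms, norms)      # pairwise cosine similarities
off_diag = cos[~np.eye(K, dtype=bool)]

print(np.allclose(norms, 1.0))              # -> True (equinorm)
print(np.allclose(off_diag, -1 / (K - 1)))  # -> True (equiangular)
```

Empirically, Neural Collapse also entails the within-class feature variability shrinking to zero and the classifier weights aligning with these class means, which is what the cited theoretical results characterize at the training optimum.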